Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
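The style-aware adaptation can be pictured roughly as a feed-forward block whose weights are modulated by the style code. The sketch below is a minimal PyTorch illustration under assumed dimensions and an assumed scale/shift modulation scheme; the paper's actual layer may predict the feed-forward weights differently.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFeedForward(nn.Module):
    """Feed-forward block whose activations are adapted by a style code.

    Illustrative sketch, not the StyleTalk implementation: the style code
    predicts per-channel scale/shift factors that modulate the hidden
    projection of a standard transformer feed-forward layer.
    """

    def __init__(self, d_model=256, d_hidden=1024, d_style=128):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        # Hypothetical adapter: maps the style code to scale and shift
        # vectors applied to the hidden activations.
        self.style_to_scale = nn.Linear(d_style, d_hidden)
        self.style_to_shift = nn.Linear(d_style, d_hidden)

    def forward(self, x, style_code):
        # x: (batch, seq_len, d_model); style_code: (batch, d_style)
        h = torch.relu(self.fc1(x))
        scale = self.style_to_scale(style_code).unsqueeze(1)  # (batch, 1, d_hidden)
        shift = self.style_to_shift(style_code).unsqueeze(1)
        h = h * (1.0 + scale) + shift      # style-conditioned modulation
        return self.fc2(h)
```

This is essentially FiLM-style conditioning; a hypernetwork that directly generates the feed-forward kernels from the style code would be another plausible reading of "adjusting the weights of the feed-forward layers."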
Answering complex logical queries on incomplete knowledge graphs is a challenging task, and has been widely studied. Embedding-based methods require training on complex queries, and cannot generalize well to out-of-distribution query structures. Recent work frames this task as an end-to-end optimization problem, and it only requires a pretrained link predictor. However, due to the exponentially large combinatorial search space, the optimal solution can only be approximated, limiting the final accuracy. In this work, we propose QTO (Query Tree Optimization) that can efficiently find the exact optimal solution. QTO finds the optimal solution by a forward-backward propagation on the tree-like computation graph, i.e., query tree. In particular, QTO utilizes the independence encoded in the query tree to reduce the search space, where only local computations are involved during the optimization procedure. Experiments on 3 datasets show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%. Moreover, QTO can interpret the intermediate solutions for each of the one-hop atoms in the query with over 90% accuracy.
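The forward-backward idea can be illustrated with a generic max-product pass over a query tree. The toy NumPy sketch below assumes conjunctive queries only, a product t-norm, and one [0, 1] score matrix per relation from a pretrained link predictor; QTO's full procedure additionally handles operations such as negation, which are omitted here.

```python
import numpy as np

def query_tree_max_product(tree, root, edge_scores, anchors, num_entities):
    """Toy max-product optimization over a query tree (not QTO itself).

    Assumptions: `tree` maps a node to its children, `edge_scores[(child,
    parent)]` is a link predictor's score matrix (rows: child entity,
    cols: parent entity), `anchors` maps leaf nodes to known entity ids,
    and conjunctions are combined with a product t-norm.
    """
    messages = {}

    def forward(node):
        if node in anchors:                       # anchor node: one-hot belief
            belief = np.zeros(num_entities)
            belief[anchors[node]] = 1.0
        else:
            belief = np.ones(num_entities)
        for child in tree.get(node, []):
            child_belief = forward(child)
            edge = edge_scores[(child, node)]
            # best truth value over child assignments, per parent entity
            belief = belief * (edge * child_belief[:, None]).max(axis=0)
        messages[node] = belief
        return belief

    def backward(node, chosen):
        # Recover the maximizing entity for every intermediate variable.
        assignment = {node: chosen}
        for child in tree.get(node, []):
            edge = edge_scores[(child, node)]
            child_best = int((messages[child] * edge[:, chosen]).argmax())
            assignment.update(backward(child, child_best))
        return assignment

    root_belief = forward(root)
    best_root = int(root_belief.argmax())
    return best_root, backward(root, best_root)
```

Because messages are computed independently per subtree, the search stays local to each edge instead of enumerating the exponential space of joint assignments, which is the intuition behind exploiting the independence encoded in the query tree.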
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In this work, we propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace. Unlike most previous methods that focus on transferring the source inner facial features but neglect facial contours, our FlowFace can transfer both of them to a target face, thus leading to more realistic face swapping. Concretely, our FlowFace consists of a face reshaping network and a face swapping network. The face reshaping network addresses the shape outline differences between the source and target faces. It first estimates a semantic flow (i.e., face shape differences) between the source and the target face, and then explicitly warps the target face shape with the estimated semantic flow. After reshaping, the face swapping network generates inner facial features that exhibit the identity of the source face. We employ a pre-trained face masked autoencoder (MAE) to extract facial features from both the source face and the target face. In contrast to previous methods that use identity embedding to preserve identity information, the features extracted by our encoder can better capture facial appearances and identity information. Then, we develop a cross-attention fusion module to adaptively fuse inner facial features from the source face with the target facial attributes, thus leading to better identity preservation. Extensive quantitative and qualitative experiments on in-the-wild faces demonstrate that our FlowFace outperforms the state-of-the-art significantly.
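As a rough picture of the fusion step, the sketch below shows a generic cross-attention block in which target attribute tokens attend to source face tokens (e.g., features from a face MAE encoder). The dimensions, normalization placement, and head count are assumptions for illustration rather than FlowFace's exact design.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion of source identity features
    into target attribute features (a sketch, not FlowFace's module).

    Target tokens act as queries; source face tokens act as keys/values,
    so identity cues are injected where the target attends to them.
    """

    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, target_tokens, source_tokens):
        # target_tokens: (B, N_t, dim) target attribute features
        # source_tokens: (B, N_s, dim) source identity features
        q = self.norm_q(target_tokens)
        kv = self.norm_kv(source_tokens)
        fused, _ = self.attn(q, kv, kv)
        x = target_tokens + fused          # residual injection of identity
        return x + self.ffn(x)
```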
Recently, segmentation-based methods have become quite popular in scene text detection; they mainly consist of two steps: text kernel segmentation and expansion. However, the segmentation process considers each pixel independently, and the expansion process struggles to achieve a favorable accuracy-speed trade-off. In this paper, we propose a Context-aware and Boundary-guided Network (CBN) to tackle these problems. In CBN, a basic text detector is first used to predict initial segmentation results. Then, we propose a context-aware module to enhance text kernel feature representations, which considers both global and local contexts. Finally, we introduce a boundary-guided module to expand enhanced text kernels adaptively using only the pixels on the contours, which not only obtains accurate text boundaries but also keeps high speed, especially on high-resolution output maps. In particular, with a lightweight backbone, the basic detector equipped with our proposed CBN achieves state-of-the-art results on several popular benchmarks, and the proposed CBN can be plugged into several segmentation-based methods. Code will be available at https://github.com/XiiZhao/cbn.pytorch.
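The boundary-guided expansion can be sketched as operating only on the contour pixels of the predicted text kernels, which is why the cost stays low on high-resolution outputs. The toy function below assumes a kernel probability map and a per-pixel offset map predicted by the network; it is an illustration, not the CBN implementation.

```python
import cv2
import numpy as np

def expand_kernel_contours(kernel_map, offset_map, score_thresh=0.5):
    """Sketch of boundary-guided expansion restricted to contour pixels.

    Assumptions: `kernel_map` is an (H, W) text-kernel probability map and
    `offset_map` is an (H, W, 2) map of predicted (dx, dy) shifts that move
    a kernel-contour pixel onto the full text boundary. Only contour pixels
    are touched; interior pixels are never revisited.
    """
    binary = (kernel_map > score_thresh).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    text_polygons = []
    for contour in contours:
        pts = contour.reshape(-1, 2)                  # (N, 2) as (x, y)
        shifts = offset_map[pts[:, 1], pts[:, 0]]     # gather per-pixel offsets
        expanded = (pts + shifts).astype(np.int32)
        text_polygons.append(expanded)
    return text_polygons
```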
kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually stores all target-language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. An empirical study shows that our investigation successfully identifies the conditions under which the NMT model is likely to fail and needs related knowledge. Experiments on six diverse target domains and two language pairs show that pruning according to local correctness yields a lighter and more explainable memory for kNN-MT domain adaptation.
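A simplified reading of pruning by local correctness is sketched below: an entry whose target token the base NMT model already predicts well, and whose neighborhood is likewise handled well, is treated as redundant knowledge and dropped. The top-k correctness test and the 0.5 neighborhood threshold are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def prune_datastore(keys, values, nmt_topk_predictions, knn_index, k=8):
    """Simplified sketch of pruning a kNN-MT datastore by local correctness.

    Assumptions: `keys` and `values` are NumPy arrays of decoder states and
    target token ids, `nmt_topk_predictions[i]` holds the base model's top-k
    token ids at entry i, and `knn_index` is a nearest-neighbour index over
    `keys` (e.g., a faiss index).
    """
    n = len(values)
    entry_correct = np.array([
        values[i] in nmt_topk_predictions[i] for i in range(n)
    ])

    keep = []
    for i in range(n):
        _, neighbour_ids = knn_index.search(keys[i:i + 1], k)
        neighbourhood_correct = entry_correct[neighbour_ids[0]].mean()
        # Keep only entries where the NMT model tends to fail locally.
        if not entry_correct[i] or neighbourhood_correct < 0.5:
            keep.append(i)
    return keys[keep], values[keep]
```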
The aesthetic assessment of images can be divided into two main forms: numerical assessment and language assessment. Aesthetic captioning of photos has so far been the only task addressed in aesthetic language assessment. In this paper, we propose a new aesthetic assessment task: Aesthetic Visual Question and Answering (AVQA) of images. Given aesthetic questions about an image, the model is expected to predict the answers. We use images from \textit{www.flickr.com}. The objective QA pairs are generated by the proposed aesthetic attribute analysis algorithms. In addition, we introduce subjective QA pairs that are converted from aesthetic numerical labels and from sentiment analysis with large-scale pretrained models. We build the first aesthetic visual question answering dataset, AesVQA, which contains 72,168 high-quality images and 324,756 pairs of aesthetic questions. Two methods for adjusting the data distribution are proposed and shown to improve the accuracy of existing models. This is the first work that addresses the task of aesthetic VQA and introduces subjectivity into VQA tasks. Experimental results show that our methods outperform other VQA models on this new task.
With the rapid development of mobile photography technology, major mobile phone manufacturers are racing to improve the shooting capability of their devices and the photo beautification algorithms of their software. However, improvements in smart devices and algorithms cannot replace human subjective photography skills. In this paper, we propose Aesthetic Language Guidance of images (ALG). We divide ALG into ALG-T and ALG-I according to whether the guidance rules are based on photography templates or on guidance images. Both ALG-T and ALG-I guide photography from three attributes: color, lighting, and image composition. The differences in these three attributes between the input image and the photography template or guidance image are described in natural language, i.e., aesthetic natural language guidance (ALG). In addition, because of the differences in lighting and composition between landscape images and portrait images, we divide input images into landscape images and portrait images. ALG-T and ALG-I provide aesthetic guidance for the two types of input images (landscape and portrait), respectively.
Image aesthetic quality assessment has become popular over the past decade. In addition to numerical assessment, natural language assessment (aesthetic captioning) has been proposed to describe the general aesthetic impression of an image. In this paper, we propose aesthetic attribute assessment, i.e., aesthetic attribute captioning, which evaluates aesthetic attributes such as composition, lighting usage, and color arrangement. Annotating comments on aesthetic attributes is a non-trivial task, which limits the scale of the corresponding datasets. We construct a novel dataset named DPC-CaptionsV2 in a semi-automatic way: knowledge is transferred from a small-scale dataset with full annotations to large-scale professional comments from a photography website. Images in DPC-CaptionsV2 contain comments on up to four aesthetic attributes: composition, lighting, color, and subject. We then propose a new version of the Aesthetic Multi-Attribute Network (AMANv2) based on the BUTD model and the VLPSA model. AMANv2 fuses the features of a mixture of the small-scale PCCD dataset with full annotations and the large-scale DPC-CaptionsV2 dataset with full annotations. Experimental results on DPC-CaptionsV2 show that our method can predict comments on the four aesthetic attributes that are closer to the aesthetic topics than those produced by the previous AMAN model. Under image captioning evaluation metrics, the specially designed AMANv2 model performs better than the CNN-LSTM model and the AMAN model.
Multi-label learning (MLL) learns from examples each associated with multiple labels, where the high cost of annotating all relevant labels for every training example is challenging for real-world applications. To cope with this challenge, we study single-positive multi-label learning (SPMLL), in which each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for the problem. In this paper, a novel SPMLL method named {\proposed}, i.e., single-positive multi-label learning with label enhancement, is proposed. Specifically, an unbiased risk estimator is derived, which is guaranteed to approximately converge to the optimal risk minimizer of fully supervised learning, showing that one positive label per instance is sufficient to train the predictive model. Then, the corresponding empirical risk estimator is established by recovering the latent soft labels as a label enhancement process, where the posterior density of the latent soft labels is approximated by a variational Beta density parameterized by an inference model. Experiments on benchmark datasets validate the effectiveness of the proposed method.
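The label-enhancement idea can be loosely illustrated as an inference network that predicts Beta parameters for latent soft labels, which then supervise the classifier with the single observed positive clamped to 1. The sketch below only illustrates that variational idea; it does not reproduce the paper's derived unbiased risk estimator or its full training objective (the KL/ELBO terms used to fit the inference model are omitted).

```python
import torch
import torch.nn as nn

class LabelEnhancementSketch(nn.Module):
    """Toy sketch of label enhancement for single-positive multi-label data.

    NOT the paper's method: an inference head predicts Beta(alpha, beta)
    parameters for latent soft labels, soft labels are sampled, and the
    classifier is trained against them with the observed positive fixed to 1.
    """

    def __init__(self, feat_dim, num_labels):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_labels)
        self.inference = nn.Linear(feat_dim, num_labels * 2)  # Beta params

    def forward(self, x, positive_idx):
        # x: (B, feat_dim); positive_idx: (B,) index of the observed label
        alpha, beta = torch.chunk(
            nn.functional.softplus(self.inference(x)) + 1e-4, 2, dim=-1)
        soft_labels = torch.distributions.Beta(alpha, beta).rsample()
        # Clamp the single observed positive label to 1.
        soft_labels = soft_labels.scatter(1, positive_idx.unsqueeze(1), 1.0)
        logits = self.classifier(x)
        # Classifier loss only; the inference head would be trained with a
        # separate variational objective that is omitted in this sketch.
        return nn.functional.binary_cross_entropy_with_logits(
            logits, soft_labels.detach())
```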